Uncertainty is ubiquitous in games, both in the agents playing games and often in the games themselves. Working with uncertainty is therefore an important component of successful deep reinforcement learning agents. While there has been substantial effort and progress in understanding and working with uncertainty for supervised learning, the literature on uncertainty-aware deep reinforcement learning is less developed. Although many of the same issues regarding uncertainty in neural networks for supervised learning carry over to reinforcement learning, there are additional sources of uncertainty due to the nature of an interactable environment. In this work, we provide an overview motivating and presenting existing techniques in uncertainty-aware deep reinforcement learning. These works show empirical benefits on a variety of reinforcement learning tasks. This work serves to help centralize the disparate results and promote future research in this area.
Quantum machine learning (QML) has been identified as one of the most promising applications of near-term quantum devices. However, the optimization of quantum machine learning models presents numerous challenges, arising from hardware imperfections and from the difficulty of navigating an exponentially scaling Hilbert space. In this work, we evaluate the potential of contemporary methods in deep reinforcement learning for augmenting gradient-based optimization routines in quantum variational circuits. We find that reinforcement-learning-augmented optimizers consistently outperform gradient descent in noisy environments. All code and pretrained weights are available to replicate the results or deploy the models at https://github.com/lockwo/rl_qvc_opt.
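As a toy illustration of the gradient-based routines such optimizers augment, the sketch below minimizes a single-qubit cost ⟨Z⟩ = cos(θ) with plain gradient descent using the parameter-shift rule. The closed-form expectation stands in for a circuit evaluation on hardware; nothing here reflects the repository's actual implementation.

```python
import math

def expval_z(theta):
    # <Z> after RY(theta) acting on |0> equals cos(theta);
    # this closed form stands in for a circuit evaluation on hardware.
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Parameter-shift rule: exact gradient for Pauli-rotation gates.
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.3
for _ in range(200):
    theta -= 0.1 * parameter_shift_grad(expval_z, theta)
# theta drifts toward pi, the minimizer of cos(theta)
```

An RL-augmented optimizer would replace the fixed step size with actions chosen by a learned policy; the gradient signal itself stays the same.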
With the rise in high resolution remote sensing technologies there has been an explosion in the amount of data available for forest monitoring, and an accompanying growth in artificial intelligence applications to automatically derive forest properties of interest from these datasets. Many studies use their own data at small spatio-temporal scales, and demonstrate an application of an existing or adapted data science method for a particular task. This approach often involves intensive and time-consuming data collection and processing, but generates results restricted to specific ecosystems and sensor types. There is a lack of widespread acknowledgement of how the types and structures of data used affects performance and accuracy of analysis algorithms. To accelerate progress in the field more efficiently, benchmarking datasets upon which methods can be tested and compared are sorely needed. Here, we discuss how lack of standardisation impacts confidence in estimation of key forest properties, and how considerations of data collection need to be accounted for in assessing method performance. We present pragmatic requirements and considerations for the creation of rigorous, useful benchmarking datasets for forest monitoring applications, and discuss how tools from modern data science can improve use of existing data. We list a set of example large-scale datasets that could contribute to benchmarking, and present a vision for how community-driven, representative benchmarking initiatives could benefit the field.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
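The k-fold cross-validation that only 37% of participants performed can be sketched in a few lines; this is a generic illustration of the splitting scheme, not tied to any particular challenge pipeline.

```python
def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous validation folds and
    # return (train_indices, val_indices) pairs, one per fold.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        splits.append((train, val))
        start += size
    return splits

splits = kfold_indices(10, 3)  # folds of sizes 4, 3, 3
```

Each sample appears in exactly one validation fold, so every model configuration is scored on all of the training data over the k runs.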
The proliferation of unmanned aircraft systems (UAS) has caused airspace regulation authorities to examine the interoperability of these aircraft with collision avoidance systems initially designed for large transport category aircraft. Limitations in the currently mandated TCAS led the Federal Aviation Administration to commission the development of a new solution, the Airborne Collision Avoidance System X (ACAS X), designed to enable a collision avoidance capability for multiple aircraft platforms, including UAS. While prior research explored using deep reinforcement learning (DRL) algorithms for collision avoidance, DRL did not perform as well as existing solutions. This work explores the benefits of using a DRL collision avoidance system whose parameters are tuned using a surrogate optimizer. We show that the use of a surrogate optimizer leads to a DRL approach that can increase safety and operational viability and support future capability development for UAS collision avoidance.
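As a hedged sketch of surrogate-based parameter tuning in general (not the ACAS X optimizer itself), the snippet below fits a cheap quadratic surrogate to a handful of noisy evaluations of an invented stand-in objective and picks the grid point the surrogate predicts to be best; all names and the objective's form are illustrative assumptions.

```python
import numpy as np

def noisy_objective(x, rng):
    # Stand-in for an expensive, stochastic evaluation (e.g., training a
    # DRL policy with parameter x and scoring it); name and form invented.
    return (x - 2.0) ** 2 + rng.normal(0.0, 0.05)

def surrogate_minimize(xs, ys, grid):
    # Fit a cheap quadratic surrogate to the observed evaluations and
    # return the grid point the surrogate predicts to be best.
    coeffs = np.polyfit(xs, ys, deg=2)
    return float(grid[np.argmin(np.polyval(coeffs, grid))])

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 4.0, 9)
ys = [noisy_objective(x, rng) for x in xs]
best = surrogate_minimize(xs, ys, np.linspace(0.0, 4.0, 401))
# best lands near the true optimum x = 2 despite evaluation noise
```

The appeal is that the surrogate is queried thousands of times at negligible cost, while the expensive objective is evaluated only a few times.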
Recent advances in operator learning theory have improved our knowledge about learning maps between infinite dimensional spaces. However, for large-scale engineering problems such as concurrent multiscale simulation for mechanical properties, the training cost for the current operator learning methods is very high. The article presents a thorough analysis of the mathematical underpinnings of the operator learning paradigm and proposes a kernel learning method that maps between function spaces. We first provide a survey of modern kernel and operator learning theory, and discuss recent results and open problems. From there, the article presents an algorithm showing how we can analytically approximate piecewise constant functions on R for operator learning, which suggests that neural operators can feasibly succeed on clustered functions. Finally, a k-means clustered domain on the basis of a mechanistic response is considered and the Lippmann-Schwinger equation for micro-mechanical homogenization is solved. The article briefly discusses the mathematics of previous kernel learning methods and some preliminary results with those methods. The proposed kernel operator learning method uses graph kernel networks to come up with a mechanistic reduced order method for multiscale homogenization.
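A minimal sketch of the idea of clustering a domain by response values: plain 1-D Lloyd's algorithm, whose centroids define a piecewise-constant approximation of the sampled function values. This is a toy stand-in for illustration, not the article's graph kernel network method.

```python
def kmeans_1d(values, k, iters=50):
    # Plain Lloyd's algorithm on scalars; the centroids define a
    # piecewise-constant approximation of the sampled function values.
    ordered = sorted(values)
    centroids = ordered[:: max(1, len(values) // k)][:k]  # crude spread-out seeding
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[j].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Samples of a function with two plateaus collapse to two levels
levels = kmeans_1d([0.0, 0.1, 0.2, 10.0, 10.1, 10.2], k=2)
```

Each centroid becomes one constant piece, so the number of clusters directly controls the resolution of the reduced representation.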
We propose KnowGL, a tool that allows converting text into structured relational data represented as a set of ABox assertions compliant with the TBox of a given Knowledge Graph (KG), such as Wikidata. We address this problem as a sequence generation task by leveraging pre-trained sequence-to-sequence language models, e.g. BART. Given a sentence, we fine-tune such models to detect pairs of entity mentions and jointly generate a set of facts consisting of the full set of semantic annotations for a KG, such as entity labels, entity types, and their relationships. To showcase the capabilities of our tool, we build a web application consisting of a set of UI widgets that help users to navigate through the semantic data extracted from a given input text. We make the KnowGL model available at https://huggingface.co/ibm/knowgl-large.
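As a hedged sketch of the kind of post-processing such generation implies, the snippet below parses a generated fact string into (subject, relation, object) assertions. The delimiter format is an invented assumption for this sketch, not KnowGL's actual output grammar.

```python
def parse_facts(generated, fact_sep="$", field_sep="|"):
    # Split a generated string into (subject, relation, object) triples.
    # The "subj|rel|obj$..." format is an invented assumption for this
    # sketch, not KnowGL's actual output grammar.
    facts = []
    for chunk in generated.split(fact_sep):
        parts = [p.strip() for p in chunk.split(field_sep)]
        if len(parts) == 3:
            facts.append(tuple(parts))
    return facts

abox = parse_facts(
    "Marie Curie|occupation|physicist$"
    "Marie Curie|award received|Nobel Prize in Physics"
)
```

Triples in this shape can then be aligned to KG identifiers to become ABox assertions against the target TBox.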
We study an online linear programming (OLP) problem that maximizes an objective function with stochastic inputs. The performance of various algorithms for such OLP problems is well studied when the stochastic inputs follow some i.i.d. distribution. The two core questions to ask are: (i) how do the algorithms perform if the stochastic inputs are not i.i.d. but merely stationary, and (ii) how can we modify our algorithms if we know the stochastic inputs exhibit a trend, so that the same efficiency can be achieved? We answer the first question by analyzing a regeneration-type input and show that the regret of two popular algorithms is bounded by the same order as in their i.i.d. counterpart. We discuss the second question in the context of linearly growing inputs and propose two trend-adaptive algorithms. We provide numerical simulations illustrating the performance of our algorithms under both regenerative and trending inputs.
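A toy one-resource version of the dual-price idea behind such online LP algorithms can be sketched as follows: learn an empirical price from an initial observed batch, then accept later orders whose reward-to-cost ratio beats it while the budget lasts. This is an illustrative sketch under invented data, not one of the paper's two trend-adaptive algorithms.

```python
def learn_price(rewards, costs, budget_share):
    # Greedily fill a proportional budget share with the best observed
    # reward-to-cost ratios; the last accepted ratio is a crude dual price.
    order = sorted(range(len(rewards)),
                   key=lambda t: rewards[t] / costs[t], reverse=True)
    spent, price = 0.0, 0.0
    for t in order:
        if spent + costs[t] > budget_share:
            break
        spent += costs[t]
        price = rewards[t] / costs[t]
    return price

def online_allocate(rewards, costs, budget, learn_frac=0.3):
    # Observe (and reject) an initial batch to learn a price, then accept
    # any later order beating that price while the budget lasts.
    n = len(rewards)
    m = max(1, int(learn_frac * n))
    price = learn_price(rewards[:m], costs[:m], budget * m / n)
    accepted, spent = [], 0.0
    for t in range(m, n):
        if rewards[t] / costs[t] >= price and spent + costs[t] <= budget:
            accepted.append(t)
            spent += costs[t]
    return accepted

accepted = online_allocate([5, 1, 4, 2, 6, 1, 3, 2, 8, 1], [1.0] * 10, budget=4.0)
```

A trend in the inputs would make the price learned early systematically stale, which is exactly the failure mode trend-adaptive algorithms are designed to correct.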
Various methods have been developed to combine inference across multiple sets of results for unsupervised clustering, within the ensemble and consensus clustering literature. The approach of reporting results from one "best" model out of several candidate clustering models generally ignores the uncertainty that arises from model selection, and yields inferences that are sensitive to the particular model and parameters chosen, and to the assumptions made, especially with small sample sizes or small cluster sizes. Bayesian model averaging (BMA) is a popular approach for combining results across multiple models that offers some attractive benefits in this setting, including a probabilistic interpretation of the combined cluster structure and quantification of model-based uncertainty. In this work we introduce ClusterBMA, a method that enables weighted model averaging across results from multiple unsupervised clustering algorithms. We use a combination of clustering internal validation criteria as a novel approximation of the posterior model probability for weighting the results from each model. From a combined posterior similarity matrix representing a weighted average of the clustering solutions across models, we apply symmetric simplex matrix factorization to calculate final probabilistic cluster allocations. This method is implemented in an accompanying R package. We explore the performance of this approach through a case study that aims to identify probabilistic clusters of individuals based on electroencephalography (EEG) data. Using simulated datasets, we also explore the ability of the proposed technique to identify robust integrated clusters with varying levels of separation between subgroups, and with varying numbers of clusters between models.
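The combined posterior similarity matrix can be illustrated with a short sketch: a weighted average of co-assignment matrices across clustering models. The weights below are fixed illustrative values standing in for the internal-validation-based posterior approximation, and the final simplex factorization step is omitted.

```python
import numpy as np

def coassignment(labels):
    # n x n binary matrix with 1 wherever two points share a cluster
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def weighted_similarity(label_sets, weights):
    # Weighted average of co-assignment matrices across clustering models,
    # analogous to ClusterBMA's combined posterior similarity matrix.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * coassignment(l) for w, l in zip(weights, label_sets))

# Two models agree on points 0-1 but disagree on where point 2 belongs
S = weighted_similarity([[0, 0, 1, 1], [0, 0, 0, 1]], weights=[0.75, 0.25])
```

Entries strictly between 0 and 1 mark exactly the pairs over which the candidate models disagree, which is the uncertainty the averaging is meant to surface.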
Indigenous African languages are categorized as under-served in artificial intelligence, and suffer from poor digital inclusivity and information access. The challenge has been how to use machine learning and deep learning models without the requisite data. Kencorpus is a Kenyan-language corpus that intends to bridge the gap on how to collect and store text and speech data sufficient to enable data-driven solutions such as machine translation, question answering, and transcription in multilingual communities. Kencorpus is a corpus (text and speech) of three languages predominantly spoken in Kenya: Swahili, Dholuo, and Luhya (the dialects Lumarachi, Lulogooli, and Lubukusu). The corpus intends to fill the gap of developing datasets that can be used for natural language processing and machine learning tasks in low-resource languages. Each of these languages contributed text and speech data to the corpus. Data collection was done by researchers from communities, schools, and collaborating partners (media houses, publishers). Kencorpus has a collection of 5,594 items: 4,442 texts (5.6 million words) and 1,152 speech files (177 hours). Based on this data, other datasets were also developed, such as part-of-speech tagging sets for Dholuo and Luhya (50,000 and 93,000 words respectively), question-answer pairs from Swahili texts (7,537 QA pairs), and a translation set into Swahili (12,400 sentences). The datasets are usable for machine learning tasks such as text processing, annotation, and translation. The project also provided proof-of-concept systems for the QA and speech-to-text machine learning tasks, with initial results confirming the usability of Kencorpus for the machine learning community. Kencorpus is the first such corpus for these low-resource languages and provides a basis for learning from and sharing experiences with similar works.